Recovering detailed facial geometry from a set of calibrated multi-view images is valuable for its wide range of applications. Traditional multi-view stereo (MVS) methods adopt an optimization-based scheme to regularize the matching cost. Recently, learning-based methods integrate all of these into an end-to-end neural network and show superior efficiency. In this paper, we propose a novel architecture to recover extremely detailed 3D faces in roughly 10 seconds. Unlike previous learning-based methods that regularize the cost volume via 3D CNNs, we propose to learn an implicit function for regressing the matching cost. By fitting a 3D morphable model from multi-view images, the features of multiple images are extracted and aggregated in the mesh-attached UV space, which makes the implicit function more effective in recovering detailed facial shape. Our method outperforms SOTA learning-based MVS methods by a large margin on the FaceScape dataset. The code and data will be released soon.
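As a loose illustration of this design (not the authors' released code), the sketch below shows how an implicit function could regress a matching cost from per-view features aggregated at a UV-space surface point; the mean/variance pooling, the dimensions, and the displacement query are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImplicitCostFunction(nn.Module):
    """Hypothetical sketch: an MLP that maps multi-view features
    aggregated at a UV-space location (plus a queried displacement
    along the fitted 3DMM surface) to a scalar matching cost."""

    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar matching cost
        )

    def forward(self, view_feats, displacement):
        # view_feats: (B, V, C) features sampled from each image at the
        # projection of a UV-space surface point; displacement: (B, 1).
        # Mean/variance pooling keeps the network invariant to the
        # number and ordering of views.
        mean = view_feats.mean(dim=1)
        var = view_feats.var(dim=1)
        x = torch.cat([mean, var, displacement], dim=-1)
        return self.mlp(x)

# Query the cost of several candidate offsets along the surface normal.
f = ImplicitCostFunction()
feats = torch.randn(8, 4, 32)             # 8 UV samples, 4 views
offsets = torch.linspace(-1, 1, 8).unsqueeze(1)
cost = f(feats, offsets)                  # (8, 1)
```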
Video semantic segmentation (VSS) is beneficial for dealing with dynamic scenes due to the continuous property of the real-world environment. On the one hand, some methods alleviate the prediction inconsistency problem between continuous frames. On the other hand, other methods employ the previous frame as prior information to assist in segmenting the current frame. Although previous methods achieve superior performance on independent and identically distributed (i.i.d.) data, they cannot generalize well to other unseen domains. Thus, we explore a new task, the video generalizable semantic segmentation (VGSS) task, which considers both continuous frames and domain generalization. In this paper, we propose a class-wise non-salient region generalized (CNSG) framework for the VGSS task. Concretely, we first define the class-wise non-salient feature, which describes features of the class-wise non-salient region that carry more generalizable information. Then, we propose a class-wise non-salient feature reasoning strategy to select and enhance the most generalizable channels adaptively. Finally, we propose an inter-frame non-salient centroid alignment loss to alleviate the prediction inconsistency problem in the VGSS task. We also extend our video-based framework to the image-based generalizable semantic segmentation (IGSS) task. Experiments demonstrate that our CNSG framework yields significant improvement on the VGSS and IGSS tasks.
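To make the alignment loss concrete, here is a minimal sketch of one plausible form of the inter-frame centroid alignment idea: per-class feature centroids of consecutive frames are pulled together. The masking and distance choices are assumptions, not the paper's exact definition; in particular, this ignores the class-wise non-salient channel selection.

```python
import torch
import torch.nn.functional as F

def centroid_alignment_loss(feat_t, feat_t1, label_t, label_t1, num_classes):
    """Hypothetical sketch: pull the class-wise feature centroids of two
    consecutive frames together to encourage temporally consistent,
    domain-generalizable features.
    feat_*: (C, H, W) feature maps; label_*: (H, W) hard labels."""
    loss, count = 0.0, 0
    for c in range(num_classes):
        m_t, m_t1 = (label_t == c), (label_t1 == c)
        if m_t.any() and m_t1.any():
            # Average features over the pixels of class c in each frame.
            cen_t = feat_t[:, m_t].mean(dim=1)
            cen_t1 = feat_t1[:, m_t1].mean(dim=1)
            loss = loss + F.mse_loss(cen_t, cen_t1)
            count += 1
    return loss / max(count, 1)

feat_t, feat_t1 = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
lab_t = torch.randint(0, 19, (32, 32))
lab_t1 = torch.randint(0, 19, (32, 32))
print(centroid_alignment_loss(feat_t, feat_t1, lab_t, lab_t1, 19))
```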
Molecular representation learning is crucial for the problem of molecular property prediction, where graph neural networks (GNNs) serve as an effective solution due to their structure modeling capabilities. Since labeled data is often scarce and expensive to obtain, it is a great challenge for GNNs to generalize in the extensive molecular space. Recently, the training paradigm of "pre-train, fine-tune" has been leveraged to improve the generalization capabilities of GNNs. It uses self-supervised information to pre-train the GNN, and then performs fine-tuning to optimize the downstream task with just a few labels. However, pre-training does not always yield statistically significant improvement, especially for self-supervised learning with random structural masking. In fact, the molecular structure is characterized by motif subgraphs, which are frequently occurring and influence molecular properties. To leverage the task-related motifs, we propose a novel paradigm of "pre-train, prompt, fine-tune" for molecular representation learning, named molecule continuous prompt tuning (MolCPT). MolCPT defines a motif prompting function that uses the pre-trained model to project the standalone input into an expressive prompt. The prompt effectively augments the molecular graph with meaningful motifs in the continuous representation space; this provides more structural patterns to aid the downstream classifier in identifying molecular properties. Extensive experiments on several benchmark datasets show that MolCPT efficiently generalizes pre-trained GNNs for molecular property prediction, with or without a few fine-tuning steps.
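A rough sketch of the "pre-train, prompt, fine-tune" pipeline under simplifying assumptions: the pre-trained GNN stays frozen, and only a small table of continuous motif prompts plus the task head are tuned. The motif detection and the additive fusion below are illustrative stand-ins for MolCPT's prompting function, not its actual implementation.

```python
import torch
import torch.nn as nn

class MotifPromptTuner(nn.Module):
    """Hypothetical sketch of continuous motif prompting: keep the
    pre-trained graph encoder frozen and train only a small motif
    embedding table plus the downstream task head."""

    def __init__(self, encoder, num_motifs=100, dim=300, num_tasks=1):
        super().__init__()
        self.encoder = encoder            # pre-trained GNN, frozen
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.motif_emb = nn.Embedding(num_motifs, dim)  # continuous prompts
        self.head = nn.Linear(dim, num_tasks)

    def forward(self, graph, motif_ids):
        h = self.encoder(graph)                          # (B, dim)
        # Average the prompt vectors of the motifs found in each molecule,
        # then augment the graph embedding with the resulting prompt.
        prompt = self.motif_emb(motif_ids).mean(dim=1)   # (B, dim)
        return self.head(h + prompt)

dummy_encoder = nn.Sequential(nn.Linear(16, 300))   # stands in for a GNN
model = MotifPromptTuner(dummy_encoder)
x = torch.randn(4, 16)                   # toy "graph" features
motifs = torch.randint(0, 100, (4, 3))   # 3 detected motifs per molecule
print(model(x, motifs).shape)            # torch.Size([4, 1])
```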
Adder Neural Network (AdderNet) provides a new way to develop energy-efficient neural networks by replacing the expensive multiplications in convolution with cheaper additions (i.e., the l1-norm). To achieve higher hardware efficiency, it is necessary to further study the low-bit quantization of AdderNet. Because the commutative law of multiplication does not hold for the l1-norm, the well-established quantization methods for convolutional networks cannot be applied to AdderNets. Thus, existing AdderNet quantization techniques propose to use only one shared scale to quantize both the weights and activations simultaneously. Admittedly, such an approach preserves the commutative law in the l1-norm quantization process, but the accuracy drop after low-bit quantization cannot be ignored. To this end, we first thoroughly analyze the difference in the distributions of weights and activations in AdderNet and then propose a new quantization algorithm that redistributes the weights and the activations. Specifically, the pre-trained full-precision weights in different kernels are clustered into different groups, so that intra-group shared and inter-group independent scales can be adopted. To further compensate for the accuracy drop caused by the distribution difference, we then develop a lossless range clamp scheme for weights and a simple yet effective outlier clamp strategy for activations. Thus, the functionality of the full-precision weights and the representation ability of the full-precision activations can be fully preserved. The effectiveness of the proposed quantization method for AdderNet is well verified on several benchmarks; e.g., our 4-bit post-training quantized adder ResNet-18 achieves 66.5% top-1 accuracy on ImageNet with comparable energy efficiency, which is about 8.5% higher than that of previous AdderNet quantization methods.
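The sketch below illustrates the group-wise scale idea under simplifying assumptions: output kernels are clustered by their weight range, each group shares one scale (intra-group sharing, inter-group independence), and values are clamped to the quantization range. The 1-D sort-and-split clustering and symmetric rounding are stand-ins, not the paper's exact scheme.

```python
import torch

def group_quantize_weights(w, num_groups=4, bits=4):
    """Hypothetical sketch: cluster output kernels by their maximum
    absolute weight, share one scale inside each group, and clamp to
    the quantization range."""
    qmax = 2 ** (bits - 1) - 1
    ranges = w.abs().flatten(1).max(dim=1).values         # per-kernel range
    # Crude 1-D "clustering": sort kernels by range, split into groups.
    order = ranges.argsort()
    groups = order.chunk(num_groups)
    w_q = torch.empty_like(w)
    scales = []
    for g in groups:
        scale = ranges[g].max() / qmax                    # shared group scale
        q = torch.clamp((w[g] / scale).round(), -qmax, qmax)  # range clamp
        w_q[g] = q * scale                                # dequantized view
        scales.append(scale)
    return w_q, scales

w = torch.randn(64, 3, 3, 3)       # conv-style weights of an adder layer
w_q, scales = group_quantize_weights(w)
print((w - w_q).abs().mean())      # average quantization error
```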
We discuss two kinds of semantics relevant to Computer Vision (CV) systems - Visual Semantics and Lexical Semantics. While visual semantics focus on how humans build concepts when using vision to perceive a target reality, lexical semantics focus on how humans build concepts of the same target reality through the use of language. The lack of coincidence between visual and lexical semantics, in turn, has a major impact on CV systems in the form of the Semantic Gap Problem (SGP). The paper, while extensively exemplifying this lack of coincidence, introduces a general, domain-agnostic methodology to enforce alignment between visual and lexical semantics.
In this paper, we present ExtremeBERT, a toolkit for accelerating and customizing BERT pretraining. Our goal is to provide an easy-to-use BERT pretraining toolkit for the research community and industry, so that pretraining popular language models on customized datasets becomes affordable with limited resources. Experiments show that, to achieve the same or better GLUE scores, our toolkit reduces the time cost by over $6\times$ for BERT Base and $9\times$ for BERT Large compared with the original BERT paper. The documentation and code are released at https://github.com/extreme-bert/extreme-bert under the Apache-2.0 license.
We present Cerberus, an open-source visual-inertial-leg odometry (VILO) state estimation solution for legged robots, which estimates position on various terrains in real time using a set of standard sensors, including stereo cameras, an IMU, joint encoders, and contact sensors. Beyond estimating the robot state, we also perform online kinematic parameter calibration and contact outlier rejection to substantially reduce position drift. Hardware experiments in various indoor and outdoor environments validate that the kinematic parameter calibration in Cerberus can reduce the estimation drift to below 1% during long-distance, high-speed locomotion. Our drift results are better than those of any other state estimation method using the same set of sensors reported in the literature. Moreover, our state estimator performs well even when the robot experiences large impacts and camera occlusions. The implementation of the state estimator, along with the datasets used to compute our results, is available at https://github.com/shuoyangrobotics/cerberus.
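Cerberus itself fuses camera, IMU, and leg data in a full estimation pipeline; the snippet below, with toy values, only sketches the textbook leg-odometry measurement such systems build on: assuming the stance foot is pinned to the ground, the body-frame velocity follows from the foot Jacobian, the joint rates, and the gyroscope.

```python
import numpy as np

def leg_odometry_velocity(J, dq, omega, p_foot):
    """Standard leg-odometry measurement (a sketch, not Cerberus code):
    with a stationary stance foot, the body velocity in the body frame is
        v_body = -J(q) @ dq - omega x p_foot(q)."""
    return -J @ dq - np.cross(omega, p_foot)

# Toy numbers for a 3-joint stance leg (all values illustrative):
J = np.random.randn(3, 3)            # foot Jacobian at current joint angles
dq = np.array([0.1, -0.2, 0.05])     # joint velocities from the encoders
omega = np.array([0.0, 0.01, 0.3])   # body angular rate from the IMU
p_foot = np.array([0.2, 0.1, -0.3])  # foot position from forward kinematics
print(leg_odometry_velocity(J, dq, omega, p_foot))
```

Online kinematic parameter calibration then amounts to treating quantities such as link lengths inside J(q) and p_foot(q) as additional states to be estimated, which is what keeps the long-distance drift low.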
Deploying machine learning models on mobile devices has attracted increasing attention. To cope with the limitations of on-device hardware resources, on-device models need to be lightweight, e.g., via techniques such as model compression of a cloud model. However, the major obstacle to improving on-device model generalization is the distribution shift between cloud data and on-device data, since the data distribution on devices usually changes over time (e.g., users may have different preferences in a recommendation system). Although real-time fine-tuning and distillation methods take this situation into account, they require on-device training, which is practically infeasible due to the low computational power and the lack of real-time labeled samples on devices. In this paper, we propose a novel task-agnostic framework, named MetaNetwork, for generating adaptive on-device model parameters from the cloud without on-device training. Specifically, our MetaNetwork is deployed on the cloud and consists of a MetaGenerator and a Transferrer module. The MetaGenerator is designed to learn a mapping function from samples to model parameters, and it can generate and deliver adaptive parameters to devices based on samples uploaded from the device to the cloud. The Transferrer aims to reduce the oscillation of the MetaGenerator, accelerate convergence, and improve model performance during both training and inference. We evaluate our method on two tasks with three datasets. Extensive experiments show that MetaNetwork achieves competitive performance in different settings.
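A minimal sketch of the cloud-side generation idea, with all module shapes assumed: a hypernetwork pools a batch of uploaded samples and emits the parameters of a small linear adapter, which the device simply loads, so no on-device training is needed. This omits the Transferrer and is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MetaGeneratorSketch(nn.Module):
    """Hypothetical sketch: map a batch of unlabeled device samples to
    the parameters of a small adaptation layer, so the device model is
    updated without on-device training."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        # Emits the weight and bias of a feat_dim x feat_dim linear adapter.
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim * feat_dim + feat_dim),
        )
        self.feat_dim = feat_dim

    def forward(self, samples):
        # samples: (N, feat_dim) features uploaded from the device;
        # mean-pooling makes the output independent of N and sample order.
        summary = samples.mean(dim=0)
        theta = self.net(summary)
        W = theta[: self.feat_dim ** 2].view(self.feat_dim, self.feat_dim)
        b = theta[self.feat_dim ** 2:]
        return W, b                      # shipped back to the device

gen = MetaGeneratorSketch()
uploaded = torch.randn(128, 32)          # recent on-device samples
W, b = gen(uploaded)
x = torch.randn(1, 32)
adapted = x @ W.T + b                    # device applies the new adapter
```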
Machine learning models are vulnerable to membership inference attacks, in which an adversary aims to predict whether a particular sample was contained in the target model's training dataset. Existing attack methods typically exploit only the output information (mostly, the loss) of the given target model. As a result, in practical scenarios where both member and non-member samples yield similarly small losses, these methods naturally fail to distinguish them. To address this limitation, in this paper we propose a new attack method that exploits the membership information from the whole training process of the target model to improve attack performance. To mount the attack in the common black-box setting, we leverage knowledge distillation and represent the membership information by the losses evaluated on a sequence of intermediate models at different distillation epochs, namely the distilled loss trajectory, together with the loss from the given target model. Experimental results over different datasets and model architectures demonstrate the superiority of our attack in terms of different metrics. For example, on CINIC-10, our attack achieves at least a 6$\times$ higher true-positive rate at a low false-positive rate of 0.1% than existing methods. Further analysis demonstrates the overall effectiveness of our attack in more stringent scenarios.
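Conceptually, the attack feature can be sketched as follows (a simplification with toy stand-in models): the losses of each sample under the model snapshots from successive distillation epochs, concatenated with its loss under the target model, form the distilled loss trajectory on which a binary member/non-member classifier is then trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def loss_trajectory(distilled_models, target_model, x, y):
    """Hypothetical sketch: describe each sample by its per-sample losses
    under the intermediate models of successive distillation epochs, plus
    its loss under the target model; this trajectory is the attack feature."""
    losses = []
    with torch.no_grad():
        for m in distilled_models:
            losses.append(F.cross_entropy(m(x), y, reduction='none'))
        losses.append(F.cross_entropy(target_model(x), y, reduction='none'))
    return torch.stack(losses, dim=1)         # (batch, num_epochs + 1)

# Toy stand-ins: one model snapshot per distillation epoch.
snapshots = [nn.Linear(8, 3) for _ in range(5)]
target = nn.Linear(8, 3)
x, y = torch.randn(4, 8), torch.randint(0, 3, (4,))
traj = loss_trajectory(snapshots, target, x, y)   # (4, 6)
# An attack classifier is trained on such trajectories computed for
# shadow samples whose membership status is known.
print(traj.shape)
```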
The classification of airborne laser scanning (ALS) point clouds is a critical task in the remote sensing and photogrammetry fields. Although recent deep-learning-based methods have achieved satisfactory performance, they overlook the unicity of the receptive field, which makes it challenging for ALS point cloud classification to distinguish regions with complex structures and extreme scale variations. In this paper, to configure multi-receptive-field features, we propose a novel receptive field fusion-and-stratification network (RFFS-Net). With a novel dilated graph convolution (DGConv) and its extension, annular dilated convolution (ADConv), as basic building blocks, the receptive field fusion process is implemented with a dilated and annular graph fusion (DAGFusion) module, which obtains multi-receptive-field feature representations by capturing dilated and annular graphs with various receptive regions. The stratification of receptive fields, performed with the multi-level decoders nested in RFFS-Net and driven by a multi-level receptive field aggregation loss (MRFALoss), drives the network to learn in the direction of supervision labels with different resolutions. With receptive field fusion and stratification, RFFS-Net is more adaptable to the classification of regions with complex structures and extreme scale variations in large-scale ALS point clouds. Evaluated on the ISPRS Vaihingen 3D dataset, our RFFS-Net significantly outperforms the baseline approach by 5.3% on mF1, achieving an overall accuracy of 82.1%, an mF1 of 71.6%, and an mIoU of 58.2%. Furthermore, experiments on the LASDU dataset and the 2019 IEEE-GRSS Data Fusion Contest dataset show that RFFS-Net achieves new state-of-the-art classification performance.
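To illustrate the dilated graph convolution building block, here is a minimal sketch under assumed details: the neighborhood keeps every d-th of the k*d nearest points, enlarging the receptive field at a constant neighbor count, followed by an edge-conv style aggregation. The annular (ADConv) variant and the DAGFusion module are omitted.

```python
import torch
import torch.nn as nn

def dilated_knn(xyz, k=8, dilation=2):
    """Hypothetical sketch of a dilated neighborhood: among the k*dilation
    nearest points, keep every `dilation`-th one, enlarging the receptive
    field without increasing the neighbor count."""
    d = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
    idx = d.topk(k * dilation + 1, largest=False).indices[:, 1:]  # drop self
    return idx[:, ::dilation]                     # (N, k)

class DGConvSketch(nn.Module):
    """Edge-conv style aggregation over the dilated neighborhood."""
    def __init__(self, in_dim, out_dim, k=8, dilation=2):
        super().__init__()
        self.k, self.dilation = k, dilation
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, xyz, feat):
        idx = dilated_knn(xyz, self.k, self.dilation)    # (N, k)
        nbr = feat[idx]                                  # (N, k, C)
        ctr = feat.unsqueeze(1).expand_as(nbr)
        edge = torch.cat([ctr, nbr - ctr], dim=-1)       # (N, k, 2C)
        return self.mlp(edge).max(dim=1).values          # (N, out_dim)

xyz = torch.randn(256, 3)           # toy ALS point coordinates
feat = torch.randn(256, 16)
out = DGConvSketch(16, 32)(xyz, feat)
print(out.shape)                    # torch.Size([256, 32])
```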